
    MRI Super-Resolution using Multi-Channel Total Variation

    This paper presents a generative model for super-resolution of routine clinical magnetic resonance (MR) images of arbitrary orientation and contrast. The model recasts the recovery of high-resolution images as an inverse problem, in which a forward model simulates the slice-select profile of the MR scanner. The paper introduces a prior based on multi-channel total variation for MRI super-resolution. The bias-variance trade-off is handled by estimating hyper-parameters from the low-resolution input scans. The model was validated on a large database of brain images. The validation showed that the model can improve brain segmentation, that it can recover anatomical information shared between images of different MR contrasts, and that it generalises well to the large variability present in MR images of different subjects. The implementation is freely available at https://github.com/brudfors/spm_superres.
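The inverse-problem formulation can be illustrated with a minimal 1D sketch. Assumptions not in the paper: a boxcar "slice profile" stands in for the true slice-select profile, a smoothed single-channel TV penalty replaces the multi-channel prior, and plain gradient descent replaces the paper's solver; all function names are illustrative.

```python
import numpy as np

def forward(x, k):
    # Forward model: boxcar slice profile that averages each block
    # of k high-resolution samples into one low-resolution sample.
    return x.reshape(-1, k).mean(axis=1)

def adjoint(z, k):
    # Adjoint of the forward model: spread each low-res value back
    # over its k high-res samples (scaled by 1/k).
    return np.repeat(z / k, k)

def tv_grad(x, eps=0.1):
    # Gradient of a smoothed (Charbonnier) total-variation penalty.
    d = np.diff(x)
    g = d / np.sqrt(d * d + eps * eps)
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

def super_resolve(y, k, lam=0.01, step=0.2, iters=200):
    # Gradient descent on ||forward(x) - y||^2 / 2 + lam * TV(x).
    x = np.repeat(y, k)  # initialise by nearest-neighbour upsampling
    for _ in range(iters):
        resid = forward(x, k) - y
        x -= step * (adjoint(resid, k) + lam * tv_grad(x))
    return x
```

The actual model additionally couples channels of different contrast through the multi-channel TV prior and estimates the `lam`-like hyper-parameters from the low-resolution inputs rather than fixing them.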

    An Algorithm for Learning Shape and Appearance Models without Annotations

    This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. It is based on the idea that a more accurate shape and appearance model leads to more accurate image registration, which in turn leads to a more accurate shape and appearance model. This naturally suggests an iterative scheme, based on a probabilistic generative model that is fit using Gauss-Newton updates within an EM-like framework. It was developed with the aim of enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas the latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data-mining applications. The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained on manually annotated data (manually defined landmarks etc.). It is also applied to the MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data are limited. The model is able to handle "missing data", which allows it to be cross-validated according to how well it can predict left-out voxels. The suitability of the derived features for classifying individuals into patient groups was assessed on a dataset of over 1,900 segmented T1-weighted MR images, which included images from the COBRE and ABIDE datasets. Comment: 61 pages, 16 figures (some downsampled by a factor of 4), submitted to MedIA.
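The alternation between estimating shared basis functions and per-image latent variables can be caricatured by a linear alternating-least-squares sketch. This is a toy stand-in for the paper's Gauss-Newton/EM fitting of a nonlinear shape-and-appearance model; the function and its arguments are illustrative names, not the paper's API.

```python
import numpy as np

def fit_appearance_model(X, n_basis=2, iters=50, seed=0):
    """Toy alternating least squares: X (n_images x n_voxels) ~ Z @ W.
    Z holds per-image latent codes (kept local to each site),
    W holds shared basis functions (what would be passed across sites)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_basis, X.shape[1]))
    for _ in range(iters):
        Z = X @ np.linalg.pinv(W)   # update latents given the basis
        W = np.linalg.pinv(Z) @ X   # update the basis given the latents
    return Z, W
```

In the distributed setting described above, only `W` (the shared basis) would be exchanged between sites, while each site retains its own rows of `Z` as secure per-image features.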

    Model-based multi-parameter mapping

    Quantitative MR imaging is increasingly favoured for its richer information content and standardised measures. However, computing quantitative parameter maps, such as those encoding the longitudinal relaxation rate (R1), apparent transverse relaxation rate (R2*) or magnetisation-transfer saturation (MTsat), involves inverting a highly non-linear function. Many methods for deriving parameter maps assume perfect measurements and do not consider how noise is propagated through the estimation procedure, resulting in needlessly noisy maps. Instead, we propose a probabilistic generative (forward) model of the entire dataset, which is formulated and inverted to jointly recover (log) parameter maps with a well-defined probabilistic interpretation (e.g., maximum likelihood or maximum a posteriori). The second-order optimisation we propose for model fitting achieves rapid and stable convergence thanks to a novel approximate Hessian. We demonstrate the utility of our flexible framework in the context of recovering more accurate maps from data acquired using the popular multi-parameter mapping protocol. We also show how to incorporate a joint total variation prior to further decrease the noise in the maps, noting that the probabilistic formulation allows the uncertainty on the recovered parameter maps to be estimated. Our implementation uses a PyTorch backend and benefits from GPU acceleration. It is available at https://github.com/balbasty/nitorch. Comment: 20 pages, 6 figures, accepted at Medical Image Analysis.
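To see why working with log parameters is convenient, consider the simplest mono-exponential piece of the signal model, S(TE) = S0 · exp(−TE · R2*): taking logs makes the fit linear in (log S0, R2*). The sketch below is a naive voxel-wise log-linear fit, not the paper's joint probabilistic model (which adds a proper noise model, a joint TV prior and a Gauss-Newton solver); the function name is illustrative.

```python
import numpy as np

def fit_r2star(signals, echo_times):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE * R2*).
    signals: (n_voxels, n_echoes) array; returns per-voxel (log S0, R2*)."""
    te = np.asarray(echo_times, dtype=float)
    A = np.stack([np.ones_like(te), -te], axis=1)  # design matrix [1, -TE]
    logs = np.log(signals)                         # work in log-signal space
    coef, *_ = np.linalg.lstsq(A, logs.T, rcond=None)
    log_s0, r2star = coef
    return log_s0, r2star
```

Note that this naive approach is exactly the kind of estimator criticised in the abstract: at low SNR the log transform distorts the noise distribution, which is what the generative formulation is designed to handle.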

    Cellular morphometric analysis: from microscopic scale to whole mouse brains

    In neurodegenerative diseases, pathological aggregates disturb cell function and morphology. Quantifying these changes is of prime interest but raises experimental and computational challenges. In this context, whole-slide imaging (WSI) offers the unique opportunity to analyze whole mouse brain sections at the cellular level using a variety of histological markers. However, this technique generates terabytes of data that are difficult to analyze in full. We developed a novel method that (1) detects cells and pathological aggregates in WSI color images at the cellular level; (2) quantifies parameters of interest such as density, shape, location or color; and (3) integrates this information into quantitative, multi-scale heat maps. This approach extracts pertinent information from high-resolution qualitative images and dramatically reduces the amount of information to be processed. A further step extends the analysis from brain sections to entire brains reconstructed in 3D using our in-house software BrainVISA (http://brainvisa.info). From the generated 3D parametric maps, voxel-wise statistical studies can be performed to investigate cellular structural alterations without a priori assumptions. Furthermore, correlating 3D whole-brain parametric maps with in vivo imaging modalities (MRI, fMRI, PET, in vivo microscopy, etc.) will improve the understanding of the relationship between brain structure and function in disease.
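Step (3), integrating cell-level detections into multi-scale heat maps, amounts to aggregating per-cell measurements over coarse tiles of the section. A minimal sketch (the function name, tile size and count-based measure are illustrative assumptions; the actual pipeline processes terabyte-scale WSI data tile by tile and supports other parameters such as shape or color):

```python
import numpy as np

def density_heat_map(centroids, shape, block=64):
    """Aggregate detected cell centroids into a coarse density map:
    one count per (block x block) tile of the original section."""
    h = int(np.ceil(shape[0] / block))
    w = int(np.ceil(shape[1] / block))
    heat = np.zeros((h, w))
    for y, x in centroids:
        heat[int(y) // block, int(x) // block] += 1
    return heat
```

Choosing a coarser `block` gives the multi-scale aspect: the same detections can be re-aggregated at several resolutions without re-reading the raw image.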

    Fitting Segmentation Networks on Varying Image Resolutions using Splatting

    Data used in image segmentation are not always defined on the same grid. This is particularly true for medical images, where the resolution, field-of-view and orientation can differ across channels and subjects. Images and labels are therefore commonly resampled onto the same grid as a pre-processing step. However, the resampling operation introduces partial volume effects and blurring, thereby changing the effective resolution and reducing the contrast between structures. In this paper we propose a splat layer, which automatically handles resolution mismatches in the input data. This layer pushes each image onto a mean space where the forward pass is performed. As the splat operator is the adjoint of the resampling operator, the mean-space prediction can be pulled back to the native label space, where the loss function is computed. Thus, the need for explicit resolution adjustment using interpolation is removed. We show on two publicly available datasets, with simulated and real multi-modal magnetic resonance images, that this model improves segmentation results compared to resampling as a pre-processing step. Comment: Accepted for MIUA 2022.
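The key property used here is that splatting (push) is the adjoint of resampling (pull): ⟨pull(x), y⟩ = ⟨x, push(y)⟩ for all x and y, which is what lets the loss computed in native label space back-propagate correctly through the mean-space prediction. A minimal 1D sketch with linear interpolation weights (illustrative code, not the paper's implementation):

```python
import numpy as np

def pull(x, idx, w):
    """Linear resampling (pull): sample x at fractional positions.
    idx: left integer index, w: fractional weight of each sample."""
    return (1 - w) * x[idx] + w * x[idx + 1]

def push(y, idx, w, n):
    """Splat (push): the adjoint of `pull` -- scatter each value of y
    back onto an n-sample grid using the same linear weights."""
    out = np.zeros(n)
    np.add.at(out, idx, (1 - w) * y)       # unbuffered scatter-add,
    np.add.at(out, idx + 1, w * y)         # so repeated indices accumulate
    return out
```

Note the use of `np.add.at` rather than fancy-indexed assignment: several samples may splat onto the same grid point, and their contributions must accumulate for the adjoint identity to hold.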

    Robust supervised segmentation of neuropathology whole-slide microscopy images

    Alzheimer's disease is characterized by pathological brain aggregates, such as Aβ plaques and neurofibrillary tangles, which trigger neuroinflammation and contribute to neuronal loss. Quantification of these pathological markers on histological sections is widely performed to study the disease and to evaluate new therapies. However, segmentation of neuropathology images presents difficulties inherent to histology (presence of debris, tissue folding, non-specific staining) as well as specific challenges (sparse staining, irregular shape of the lesions). Here, we present a supervised classification approach for robust pixel-level classification of large neuropathology whole-slide images. We propose a weighted form of Random Forest in order to fit non-linear decision boundaries that take class imbalance into account. Both color and texture descriptors were used as predictors, and model selection was performed via a leave-one-image-out cross-validation scheme. Our method showed superior results compared to the current state-of-the-art method when applied to the segmentation of Aβ plaques and neurofibrillary tangles in a human brain sample. Furthermore, using parallel computing, our approach easily scales up to large gigabyte-sized images. To demonstrate this, we segmented a whole-brain histology dataset from a mouse model of Alzheimer's disease. This demonstrates the relevance of our method as a routine tool for whole-slide microscopy image analysis in clinical and preclinical research settings.
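One common way to realise such class weighting, sketched below, is to weight each class inversely to its pixel frequency, so that rare lesion pixels are not swamped by background. This is a standard balanced-weighting scheme offered as an illustration; the paper's exact weighting may differ.

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency class weights: w_c = N / (K * n_c), where N is
    the total pixel count, K the number of classes and n_c the count of
    class c. Rare classes (e.g. plaque/tangle pixels) get large weights."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))
```

With this convention the weights average to one over the classes, so reweighting the training loss (or the votes of each tree) shifts the decision boundary toward the minority class without changing the overall loss scale much.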

    Automated cell individualization and counting in cerebral microscopic images

    In biomedical research, cell counting is important for assessing physiological and pathophysiological processes. However, the automated analysis of microscopic images of tissue remains extremely challenging. We propose an automated processing protocol for the proper segmentation of individual cells in microscopic images. A Gaussian filter is first applied to improve the signal-to-noise ratio (SNR); then an original min-max method is proposed to produce an image in which the information describing both cell centers (minima) and cell boundaries is enhanced. Finally, a contour-based model initialized from the minima of the min-max cartography is applied to achieve cell individualization. The method is evaluated on a NeuN-stained macaque brain section, in sub-regions presenting various levels of neuron surface fraction. Comparison with several reference methods demonstrates that our method performs better. A first application to the segmentation of neurons in the hippocampus illustrates the ability of our approach to deal with massive and complex data.
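The first two stages, Gaussian filtering followed by detection of intensity minima as candidate cell centers, can be sketched as follows. This is a simplified stand-in for the full min-max cartography and contour-based individualization, with illustrative function names:

```python
import numpy as np

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian filtering to improve the SNR before detection."""
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t * t / (2.0 * sigma * sigma))
    k /= k.sum()
    for axis in (0, 1):  # convolve rows, then columns
        img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), axis, img)
    return img

def local_minima(img):
    """Mark strict local minima in a 3x3 neighbourhood (candidate cell
    centers in stained images, where cell bodies are darker than tissue)."""
    p = np.pad(img, 1, constant_values=np.inf)
    mask = np.ones(img.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            mask &= img < p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return mask
```

In the full protocol, these minima seed a contour-based model that separates touching cells, which is what the abstract refers to as cell individualization.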